
    A Universal Scheme for Transforming Binary Algorithms to Generate Random Bits from Loaded Dice

    In this paper, we present a universal scheme for transforming an arbitrary algorithm for biased 2-face coins into one that generates random bits from the general source of an m-sided die, hence enabling the application of existing algorithms to general sources. In addition, we study approaches for efficiently generating a prescribed number of random bits from an arbitrary biased coin. This contrasts with most existing works, which typically assume that the number of coin tosses is fixed and generate a variable number of random bits.
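    As a point of reference for the kind of "binary algorithm" such a scheme takes as input, the sketch below implements the classic von Neumann procedure for a biased coin. It is a baseline illustration only, not the paper's universal transformation, and the bias value is chosen arbitrarily.

```python
import random

def von_neumann_extract(flips):
    """Classic von Neumann extractor: read flips in pairs, output the first
    bit of each unequal pair (01 -> 0, 10 -> 1) and discard 00 and 11.
    The output bits are unbiased for any fixed, unknown coin bias."""
    out = []
    for i in range(0, len(flips) - 1, 2):
        a, b = flips[i], flips[i + 1]
        if a != b:
            out.append(a)
    return out

# Toy demo with a coin of (hypothetical) bias 0.7.
flips = [1 if random.random() < 0.7 else 0 for _ in range(10_000)]
bits = von_neumann_extract(flips)
print(len(bits), sum(bits) / max(1, len(bits)))  # fraction of ones is close to 1/2
```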

    On the expressibility of stochastic switching circuits

    Stochastic switching circuits are relay circuits that consist of stochastic switches (which we call pswitches). We study the expressive power of these circuits; in particular, we address the following basic question: given an arbitrary integer q and the pswitch set {1/q, 2/q, ..., (q-1)/q}, can we realize any rational probability with denominator q^n (for arbitrary n) by a simple series-parallel stochastic switching circuit? In this paper, we generalize previous results and prove that when q is a multiple of 2 or 3, the answer is positive. We also show that when q is a prime number larger than 3, the answer is negative. In addition, we prove that any desired probability can be approximated well by a circuit whose size is linear in n, with error less than q^(-n).
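    The building blocks behind these circuits are the two composition rules for closure probabilities: series composition multiplies them, while parallel composition combines them as 1 - (1 - p)(1 - q). The sketch below works out a few compositions exactly; the pswitch set and the particular circuits are our own illustrations, not constructions from the paper.

```python
from fractions import Fraction

def series(p, q):
    """Closure probability of two pswitches in series: both must be closed."""
    return p * q

def parallel(p, q):
    """Closure probability of two pswitches in parallel: at least one closed."""
    return 1 - (1 - p) * (1 - q)

# Pswitch set for q = 4: {1/4, 2/4, 3/4}.
quarter, three_quarters = Fraction(1, 4), Fraction(3, 4)

print(series(quarter, three_quarters))                      # 3/16  (denominator q^2)
print(parallel(quarter, quarter))                           # 7/16  (denominator q^2)
print(series(parallel(quarter, quarter), three_quarters))   # 21/64 (denominator q^3)
```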

    Efficiently Extracting Randomness from Imperfect Stochastic Processes

    We study the problem of extracting a prescribed number of random bits by reading the smallest possible number of symbols from non-ideal stochastic processes. The related interval algorithm proposed by Han and Hoshi has asymptotically optimal performance; however, it assumes that the distribution of the input stochastic process is known. The motivation for our work is the fact that, in practice, sources of randomness have inherent correlations and are affected by measurement noise, so it is hard to obtain an accurate estimate of their distribution. This challenge was addressed by the concepts of seeded and seedless extractors, which can handle general random sources with unknown distributions. However, known seeded and seedless extractors provide extraction efficiencies that are substantially smaller than Shannon's entropy limit. Our main contribution is the design of extractors that have variable input length and fixed output length, are efficient in the consumption of symbols from the source, are capable of generating random bits from general stochastic processes, and approach the information-theoretic upper bound on efficiency.
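    To make the variable-input-length, fixed-output-length setting concrete, the sketch below keeps reading symbols from a source of unknown bias until a prescribed number of output bits has been produced, using von Neumann pairing as a stand-in for the paper's far more symbol-efficient constructions. The source and its bias are hypothetical.

```python
import random

def extract_fixed_length(next_symbol, k):
    """Read symbols from a binary source of unknown bias until k unbiased
    output bits have been produced (variable input length, fixed output
    length).  Von Neumann pairing is only a stand-in for the paper's
    more efficient extractors."""
    out, used = [], 0
    while len(out) < k:
        a, b = next_symbol(), next_symbol()
        used += 2
        if a != b:
            out.append(a)
    return out, used

source = lambda: 1 if random.random() < 0.8 else 0   # hypothetical biased source
bits, symbols_read = extract_fixed_length(source, 64)
print(symbols_read, bits[:8])   # number of symbols consumed varies run to run
```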

    Linear Transformations for Randomness Extraction

    Information-efficient approaches for extracting randomness from imperfect sources have been extensively studied, but simpler and faster ones are required in high-speed random number generation applications. In this paper, we focus on linear constructions, namely, applying linear transformations for randomness extraction. We show that linear transformations based on sparse random matrices are asymptotically optimal for extracting randomness from independent sources and bit-fixing sources, and that they are efficient (though not necessarily optimal) for extracting randomness from hidden Markov sources. Further study demonstrates the flexibility of such constructions with respect to source models, as well as their excellent information-preserving capabilities. Since linear transformations based on sparse random matrices are computationally fast and easy to implement in hardware such as FPGAs, they are very attractive in high-speed applications. In addition, we explore explicit constructions of transformation matrices. We show that the generator matrices of primitive BCH codes are good choices, but linear transformations based on such matrices require more computational time due to their high densities.
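    The core operation is a matrix-vector product over GF(2): the extractor output is y = Mx mod 2, where x is a block of raw source bits and M is a sparse random binary matrix. The sketch below shows this operation; the dimensions and density are arbitrary illustrative values, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_random_matrix(m, n, density=0.05):
    """m x n binary matrix with roughly a `density` fraction of ones."""
    return (rng.random((m, n)) < density).astype(np.int64)

def linear_extract(M, x):
    """Extract len(M) output bits from the raw bits x via y = M x over GF(2)."""
    return (M @ x) % 2

n, m = 1024, 512                                  # illustrative sizes only
M = sparse_random_matrix(m, n)
x = rng.integers(0, 2, size=n)                    # raw bits from an imperfect source
y = linear_extract(M, x)
print(y[:16])
```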

    The robustness of stochastic switching networks

    Many natural systems, including chemical and biological systems, can be modeled using stochastic switching circuits. These circuits consist of stochastic switches, called pswitches, which operate with a fixed probability of being open or closed. We study the effect of introducing an error of size ε to each pswitch in a stochastic circuit. We analyze two constructions, simple series-parallel and general series-parallel circuits, and prove that simple series-parallel circuits are robust to small error perturbations, while general series-parallel circuits are not. Specifically, the total error introduced by perturbations of size less than ε is bounded by a constant multiple of ε in a simple series-parallel circuit, independent of the size of the circuit. However, the same result does not hold in the case of more general series-parallel circuits. In the case of a general stochastic circuit, we prove that the overall error probability is bounded by a linear function of the number of pswitches.
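    The sketch below illustrates the robustness claim numerically for one family of simple series-parallel circuits: every pswitch probability is perturbed by at most eps, and the deviation of the overall closure probability stays within a small multiple of eps regardless of circuit size. The particular circuit shape and the sampling are our own illustration, not the paper's proof.

```python
import random

def series(p, q):
    return p * q                      # both pswitches must be closed

def parallel(p, q):
    return 1 - (1 - p) * (1 - q)      # at least one pswitch closed

def ssp_closure(probs, ops, eps=0.0):
    """Closure probability of a simple series-parallel circuit, built by
    composing one pswitch at a time; each pswitch is perturbed by up to eps."""
    jitter = lambda p: min(1.0, max(0.0, p + random.uniform(-eps, eps)))
    closure = jitter(probs[0])
    for p, op in zip(probs[1:], ops):
        closure = op(closure, jitter(p))
    return closure

for size in (5, 20, 80):
    probs = [0.5] * size
    ops = ([series, parallel] * size)[: size - 1]   # alternate compositions
    exact = ssp_closure(probs, ops, eps=0.0)
    worst = max(abs(ssp_closure(probs, ops, eps=0.01) - exact) for _ in range(2000))
    print(size, round(worst / 0.01, 2))   # deviation measured in multiples of eps
```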

    Activation Learning by Local Competitions

    Despite its great success, backpropagation has certain limitations that necessitate the investigation of new learning methods. In this study, we present a biologically plausible local learning rule that improves upon Hebb's well-known proposal and discovers unsupervised features through local competitions among neurons. This simple learning rule enables the creation of a forward learning paradigm called activation learning, in which the output activation (the sum of the squared outputs) of the neural network estimates the likelihood of the input patterns, or "learn more, activate more" in simpler terms. For classification on a few small classical datasets, activation learning performs comparably to backpropagation using a fully connected network, and outperforms backpropagation when there are fewer training samples or unpredictable disturbances. Additionally, the same trained network can be used for a variety of tasks, including image generation and completion. Activation learning also achieves state-of-the-art performance on several real-world datasets for anomaly detection. This new learning paradigm, which has the potential to unify supervised, unsupervised, and semi-supervised learning, and which is also more resistant to adversarial attacks, deserves in-depth investigation.
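    As a rough illustration of local learning driven by competition (and explicitly not the paper's actual update rule), the sketch below trains a single layer with a generic winner-take-all Hebbian step, after which frequently seen patterns tend to yield a larger sum of squared outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def competitive_hebbian_step(W, x, lr=0.01):
    """Generic winner-take-all Hebbian update (illustration only; the paper's
    rule differs): the most active unit moves toward the input, so inputs
    that occur often end up producing larger activations."""
    y = W @ x                                   # forward pass
    winner = int(np.argmax(y))                  # local competition among units
    W[winner] += lr * (x - W[winner])           # Hebbian-style pull toward x
    W[winner] /= np.linalg.norm(W[winner])      # keep the weight vector bounded
    return W

W = rng.normal(size=(8, 32))
W /= np.linalg.norm(W, axis=1, keepdims=True)
prototype = rng.normal(size=32)                 # a hypothetical recurring pattern
for _ in range(2000):
    x = prototype + 0.1 * rng.normal(size=32)
    W = competitive_hebbian_step(W, x)
print(float(np.sum((W @ prototype) ** 2)))      # "activation" of a familiar input
```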

    Synthesis of Stochastic Flow Networks

    A stochastic flow network is a directed graph with incoming edges (inputs) and outgoing edges (outputs). Tokens enter through the input edges, travel stochastically in the network, and can exit the network through the output edges. Each node in the network is a splitter: a token entering the node through an incoming edge exits on one of the outgoing edges according to a predefined probability distribution. Stochastic flow networks can be easily implemented by DNA-based chemical reactions, with promising applications in molecular computing and stochastic computing. In this paper, we address a fundamental synthesis question: given a finite set of possible splitters and an arbitrary rational probability distribution, design a stochastic flow network such that every token that enters the input edge exits the outputs with the prescribed probability distribution. The problem of probability transformation dates back to von Neumann's 1951 work and was followed, among others, by Knuth and Yao in 1976. Most existing works have focused on the "simulation" of target distributions. In this paper, we design optimal-sized stochastic flow networks for "synthesizing" target distributions. We show that when each splitter has two outgoing edges and is unbiased, an arbitrary rational probability \frac{a}{b} with a \leq b \leq 2^n can be realized by a stochastic flow network of size n, and that this size is optimal. Compared to other stochastic systems, feedback (cycles in networks) strongly improves the expressibility of stochastic flow networks.
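    The role of feedback can be illustrated with a toy network of unbiased splitters that outputs 1 with probability 1/3: two splitters create four equally likely paths, one of which loops back to the entry node. This is a standard textbook-style construction, not the paper's optimal synthesis.

```python
import random

def route_token():
    """Send one token through a tiny flow network of unbiased splitters.
    Two splitters give four equally likely paths; the (1, 1) path is a
    feedback edge that returns the token to the entry, and the remaining
    three paths map 1:2 onto the two outputs, realizing probability 1/3."""
    while True:
        a, b = random.randint(0, 1), random.randint(0, 1)
        if (a, b) == (1, 1):
            continue                         # feedback edge: token re-enters
        return 1 if (a, b) == (0, 0) else 0  # one of the three remaining paths

samples = [route_token() for _ in range(100_000)]
print(sum(samples) / len(samples))           # close to 1/3
```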

    The Synthesis and Analysis of Stochastic Switching Circuits

    Stochastic switching circuits are relay circuits that consist of stochastic switches called pswitches. The study of stochastic switching circuits has widespread applications in many fields of computer science, neuroscience, and biochemistry. In this paper, we discuss several properties of stochastic switching circuits, including robustness, expressibility, and probability approximation. First, we study robustness, namely, the effect caused by introducing an error of size \epsilon to each pswitch in a stochastic circuit. We analyze two constructions and prove that simple series-parallel circuits are robust to small error perturbations, while general series-parallel circuits are not. Specifically, the total error introduced by perturbations of size less than \epsilon is bounded by a constant multiple of \epsilon in a simple series-parallel circuit, independent of the size of the circuit. Next, we study the expressibility of stochastic switching circuits: given an integer q and a pswitch set S=\{\frac{1}{q},\frac{2}{q},...,\frac{q-1}{q}\}, can we synthesize any rational probability with denominator q^n (for arbitrary n) with a simple series-parallel stochastic switching circuit? We generalize previous results and prove that when q is a multiple of 2 or 3, the answer is yes. We also show that when q is a prime number larger than 3, the answer is no. Finally, probability approximation is studied for the general case of an arbitrary pswitch set S=\{s_1,s_2,...,s_{|S|}\}. In this case, we propose an algorithm based on local optimization to approximate any desired probability. The analysis reveals that the approximation error of a switching circuit decreases exponentially with increasing circuit size.
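    To see how approximation error can fall exponentially with circuit size, the sketch below handles the special case of fair (p = 1/2) pswitches: composing the current circuit in series with a fair pswitch halves its closure probability (x -> x/2), while composing in parallel maps x -> (x + 1)/2, so choosing compositions by the binary digits of the target pins the probability down to within 2^{-n}. This concrete special case is our illustration, not the paper's general local-optimization algorithm.

```python
def approx_circuit_probability(target, n):
    """Closure probability of a size-n simple series-parallel circuit of fair
    pswitches, chosen from the binary digits of `target` in [0, 1]; the result
    is guaranteed to lie within 2**-n of the target."""
    digits, t = [], target
    for _ in range(n - 1):                 # first n-1 binary digits of the target
        t *= 2
        bit = int(t >= 1)
        digits.append(bit)
        t -= bit
    prob = 0.5                             # innermost circuit: a single pswitch
    for b in reversed(digits):
        prob = (prob + 1) / 2 if b else prob / 2   # parallel / series with p = 1/2
    return prob

target = 0.423
for n in (2, 4, 8, 16):
    print(n, abs(approx_circuit_probability(target, n) - target))  # error <= 2**-n
```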